Nuclei Kaggle Project

For this project I had to compile the opencv libraries from scratch to avoid the message:

*error: /opt/conda/conda-bld/opencv_1491943970124/work/opencv-3.1.0/modules/highgui/src/window.cpp:545: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvShowImage

Uninstalling OpenCV sudo dpkg -r opencv

Install libopencv sudo apt-get purge libopencv*

The instructions to install opencv: https://www.pyimagesearch.com/2016/10/24/ubuntu-16-04-how-to-install-opencv/

Understand the type of image

In [2]:
# Load son modules
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import pandas as pd
from IPython.display import display
import glob, os
import pickle
from keras.preprocessing import image
import tensorflow as tf
from PIL import Image
Using TensorFlow backend.

Let us load one image and its masks

In [14]:
# Create a figure
plt.figure(figsize=(20, 10))

# Load one image and its mask
img = mpimg.imread('00071198d059ba7f5914a526d124d28e6d010c92466da21d4a04cd5413362552/images/00071198d059ba7f5914a526d124d28e6d010c92466da21d4a04cd5413362552.png')
mask = mpimg.imread('00071198d059ba7f5914a526d124d28e6d010c92466da21d4a04cd5413362552/masks/0e548d0af63ab451616f082eb56bde13eb71f73dfda92a03fbe88ad42ebb4881.png')
mask2 = mpimg.imread('00071198d059ba7f5914a526d124d28e6d010c92466da21d4a04cd5413362552/masks/0ea1f9e30124e4aef1407af239ff42fd6f5753c09b4c5cac5d08023c328d7f05.png')


# Plot the image and its mask
ax = plt.subplot(231)
ax.set_title("Image 1")
imgplot = plt.imshow(img)
ax1 = plt.subplot(232)
ax1.set_title("Mask 1")
maskplot = plt.imshow(mask)
ax2 = plt.subplot(234)
ax2.set_title("Image 1")
img2plot = plt.imshow(img)
ax3 = plt.subplot(235)
ax3.set_title("Mask 2")
mask2plot = plt.imshow(mask2)

plt.show()

Now let us build a RLE decode function

The above function requires improvement, feel free to change it.

In [15]:
# decode mask function
def mask_decoder(rle, shapesrci):
    """
    Input
    rle:        Is a string with the "pixel location" "number of pixels" pair
                separated by spaces
    
    shapesrci:   Is the shape of the source image from where this mask was taken
    
    Output
    return: A numpy representation of the mask
    """ 

    # Create a cursor to modify flatten array
    position = rle[::2]
    displace = rle[::-2]
    cursor = dict(zip(position,reversed(displace)))


    # Create a 1d numpy flate array of zeros the size of the mask source image
    mask_img = np.zeros(shapesrci[1] * shapesrci[1])

    # Replace the 1d zeros by the rle using using the cursor
    for key in cursor:
        rset = range(int(key),int(key)  + int(cursor[key]))
        np.put(mask_img, rset, 1)
    
    # Reshape the 1d array into a 2d array.
    mask_img2d = np.reshape(mask_img,(shapesrci[0],shapesrci[0]))
    
    # Arrange the new array to compensate for the flattening
    
    # rotate image
    h,w = mask_img2d.shape[:2]
    M = cv2.getRotationMatrix2D((w/2,h/2),90,1.0)
    dst = cv2.warpAffine(mask_img2d,M,(w,h))
    
    # Flip it
    dst = horizontal_img = cv2.flip( dst, 0 )

    return dst

Let us test the above function

It is important to run the first cell because you read the images used in the below test in that cell.

In [18]:
#### BELOW HERE IS JUST AN EXAMPLE OF USING THE FUCTION ABOVE ####

# Load mask csv file into a pandas dataframe
labels = pd.read_csv('stage1_train_labels.csv')
imgencodedpix = labels.loc[labels['ImageId'] == '00071198d059ba7f5914a526d124d28e6d010c92466da21d4a04cd5413362552']

# Select one two rows, in our example the row 1" and row "2"
print("Run Length Image Representation Mask 1")
i,p = imgencodedpix.loc[0]
display(p)
print("Run Length Image Representation Mask 2")
i2,p2 = imgencodedpix.loc[1]
display(p2)

# Convert the csv rows to python list
rle = p.split(' ')
rle2 = p2.split(' ')

# Collect the shape of the source image of the masks
shapesrci = img.shape
# Called the decode function in the previous cell passing the two parameters
dst = mask_decoder(rle, shapesrci)
dst2 = mask_decoder(rle2, shapesrci)

# Plot the origial image the mask and the decoded mask
plt.figure(figsize=(20, 10))

# First column first row of the figure is the source image
ax = plt.subplot(231)
ax.set_title("Image 1")
imgplot = plt.imshow(img)

# Second column first row of the grid is the mask 1
ax1 = plt.subplot(232)
ax1.set_title("Mask 1")
maskplot = plt.imshow(mask)

# Third column first row in the decoded mask 1
ax4 = plt.subplot(233)
ax4.set_title("Decoded Mask 1")
mask_img2dplot = plt.imshow(dst)

# First column second row repeats the original image
ax2 = plt.subplot(234)
ax2.set_title("Image 1")
img2plot = plt.imshow(img)

# Second column second row is the mask 2
ax3 = plt.subplot(235)
ax3.set_title("Mask 2")
maskplot = plt.imshow(mask2)

# Third column second row is the decoded mask 2
ax5 = plt.subplot(236)
ax5.set_title("Decoded Mask 2")
mask2_img2dplot = plt.imshow(dst2)



print("Mask 1 and Decoded ELR are equal?: {}".format(np.array_equal(mask,dst)))
print("Mask 2 and Decoded ELR are equal?: {}".format(np.array_equal(mask,dst2)))



plt.show()
Run Length Image Representation Mask 1
'6908 1 7161 8 7417 8 7672 9 7928 9 8184 9 8440 9 8696 9 8952 9 9209 8 9465 8 9722 7 9978 7 10235 6 10493 4 10752 1'
Run Length Image Representation Mask 2
'36269 7 36523 11 36778 13 37033 15 37288 17 37543 18 37799 18 38054 19 38310 19 38565 20 38821 20 39077 20 39333 19 39589 19 39845 18 40101 18 40357 17 40614 15 40870 15 41127 13 41384 10 41641 8 41899 4'
Mask 1 and Decoded ELR are equal?: False
Mask 2 and Decoded ELR are equal?: False

Indexing the images

We need to find a way to index the images and their masks so we can easily manipulate them without duplicating them or changing their name. We will use a pandas data frame to read the images names, their paths in the OS to their files and their path to the masks. The pandas dataframe is like an execel spreadsheet and it has its internal index that can be used to label the images in case we do not want to use the full image names.

Also once we index the image once we want to save the pandas dataframe so in the future we do not reindex them but just load the saved dataframe.

Finally if we add images we can delete the saves data frame and that will make the function to re-create it with the new images.

In [19]:
# First we create a structure to start the location of all the images
images_path = []
images_name = []
images_ndim = []
images_masks = []
separator = ','

# Capture Images Data
def load_images():
    
    if not os.path.isfile('training_images.pckl'):
        
        print("Data has not been loaded, load it")
        
        # Create file
        file = open('training_images.pckl', 'w').close()
        
        for path in glob.glob("*/images/*.png"):
            images_path.append(path)
            images_name.append(path.split('/',3)[2])
            images_ndim.append(cv2.imread(path).shape)
            images_masks.append(glob.glob(path.split('/',3)[0]+'/masks/*.png'))           
      
        d = {'Image_Name': images_name,
             'Image_Path': images_path,
             'Image_ndim': images_ndim,
             'Image_masks': images_masks}

        training_images = pd.DataFrame(data=d)
    
        # Save the dataframe in a pickle file
        with open('training_images.pckl', 'wb') as fh:
            pickle.dump(training_images,fh)
    else:
        
        print("Data will be loaded from the pickel file")
        training_images = pickle.load(open( 'training_images.pckl', 'rb' ) )
        
    
    return training_images



### BELOW HERE IS AN EXAMPLE OF HOW TO USE THE FUNCTION ###

# Load image metadata
training_images = load_images()

# Print a sample of the training_images dataframe
display(training_images.head(5))

# Get general information about the images
print("Number of images: {}".format(len(training_images['Image_Name'])))
print("Sizes of the images\n")
s = '\n'
print(s.join(str(x) for x in list(training_images['Image_ndim'].unique())))
print('\n')
avg_width = int(np.mean([ndim[0] for ndim in training_images['Image_ndim'].unique()]))
print("Average image size width: {}".format(avg_width))
avg_high = int(np.mean([ndim[1] for ndim in training_images['Image_ndim'].unique()]))
print("Average image size high: {}\n".format(avg_high))
Data will be loaded from the pickel file
Image_Name Image_Path Image_masks Image_ndim
0 573e1480b500c395f8d3f1800e1998bf553af0d3d43039... 573e1480b500c395f8d3f1800e1998bf553af0d3d43039... [573e1480b500c395f8d3f1800e1998bf553af0d3d4303... (256, 256, 3)
1 4bf6a5ec42032bb8dbbb10d25fdc5211b2fe1ce44b6e57... 4bf6a5ec42032bb8dbbb10d25fdc5211b2fe1ce44b6e57... [4bf6a5ec42032bb8dbbb10d25fdc5211b2fe1ce44b6e5... (256, 256, 3)
2 3b75fc03a1d12b29bd2870eb1f6fdb44174dbd1118dfc1... 3b75fc03a1d12b29bd2870eb1f6fdb44174dbd1118dfc1... [3b75fc03a1d12b29bd2870eb1f6fdb44174dbd1118dfc... (256, 256, 3)
3 2e172afb1f43b359f1f0208da9386aefe97c0c1afe202a... 2e172afb1f43b359f1f0208da9386aefe97c0c1afe202a... [2e172afb1f43b359f1f0208da9386aefe97c0c1afe202... (256, 256, 3)
4 a90401357d50e1376354ae6e5f56a2e4dff3fdb5a4e8d5... a90401357d50e1376354ae6e5f56a2e4dff3fdb5a4e8d5... [a90401357d50e1376354ae6e5f56a2e4dff3fdb5a4e8d... (256, 320, 3)
Number of images: 670
Sizes of the images

(256, 256, 3)
(256, 320, 3)
(360, 360, 3)
(520, 696, 3)
(512, 640, 3)
(1024, 1024, 3)
(260, 347, 3)
(603, 1272, 3)
(1040, 1388, 3)


Average image size width: 536
Average image size high: 700

Normalization

Images need to be normalized before training the network, this means standarize its size, pre-process them. Below add cell with any function your think will help to prepare the images to feed them into the network.

In [21]:
# Rescale the image and transform it to greyscale function
# it returns two images one is a PIL image the 
def path_to_image(img_path):
    
    """
    
    img_path:  Path to an image file
    
    Return:
    img:       reshape greyscale PIL.Image.Image object 2d array
    img_cv2:   reshape 2d array no gray 
    
    """
    # loads RGB image as PIL.Image.Image type using the average size and grayscaling it
    img = image.load_img(img_path, grayscale=True, target_size=(536, 700))
    # Load image and resize using opencv in case you need a different format
    img_cv2 = cv2.resize(cv2.imread(img_path), (700,536), interpolation = cv2.INTER_AREA)
    return img,img_cv2
In [22]:
### BELOW EXAMPLE OF HOW TO USE THE ABOVE FUNCTION ###

# Let us test the tranformation
def plot_norm_images(_from, _to,images):
    
    """
    
    _from:  Pandas series start pointer
    _to:    Pandas series end pointer
    images: Pandas data series with path to images
    
    """
    plt.figure(figsize=(30,30))
    for ii,i in enumerate(images.loc[_from:_to]):
        axel_name = 'ax' + str(ii + 1)
        axel_name = plt.subplot2grid((6,6),
                                 (0,ii),colspan=1)
        axel_name.set_title("Original Image: " + str(ii))
        original_image = image.load_img(i)
        origimgplot = plt.imshow(original_image)
        axel_name2 = 'ax2' + str(ii + 1)
        axel_name2 = plt.subplot2grid((6,6),
                                  (1,ii))
        axel_name2.set_title("Normalize Image: " + str(ii))
        norm_image = path_to_image(i)[0]
        normimgplot = plt.imshow(norm_image, cmap = plt.cm.gray)

    plt.show()

plot_norm_images(0,5,training_images['Image_Path'])
#Let see how the mask looklike
print("First 5 normalize masks of the image {}".format(training_images['Image_Name'].loc[0]))
plot_norm_images(0,5,pd.Series(training_images['Image_masks'].loc[0]))
First 5 normalize masks of the image 573e1480b500c395f8d3f1800e1998bf553af0d3d43039333d33cf37d08f64e5.png

Transfor image to Tensor

The following function convert the reshaped, greyscale image into a normalize tensor to be used by tensorflow. Other functions can be code here to conver the image to a tensor that can be used with keras or cafe.

In [23]:
#Normalize the image pixels between 1 and -1 in prepartion
# to feed it to tensorflow
def tf_preprocess_image(img):
    
    """
    
    img:    3D or 4D image representation greyscale image
    tf_img: Tensor standarized for tensorflow
    
    """
    tf_img = tf.image.per_image_standardization(img)
    return tf_img
In [24]:
### BELOW EXAMPLE CODE TO USE THE ABOVE FUNCTION ###

# Test the normalization to tensor 

i = training_images['Image_Path'].loc[0]
img = path_to_image(i)[0]
tensor = tf_preprocess_image(np.array(img.getdata(),
                np.uint8).reshape(img.size[1], img.size[0], 1))

# Access the tensor to get its information

with tf.Session() as sess:
    out = sess.run(tensor)

# Print the shape of the normalize image and the tensor

print("Normalize image size: {}".format(img.size))
print("Tensor image shape: {}".format(out.shape))


# Plot both the tensor and the image to check if they are the same
ax1 = plt.subplot(121)
ax1.set_title("Normalize Image")
plt.imshow(img,cmap = plt.cm.gray)
ax2 = plt.subplot(122)
ax2.set_title("Tensor flow tensor Image")
plt.imshow(out.reshape(out.shape[0],out.shape[1]),cmap = plt.cm.gray)
plt.show()
   
Normalize image size: (700, 536)
Tensor image shape: (536, 700, 1)

Histogram Approach

As per the agreement our group would investigate the histogram approach. The idea is to see if applying fourier transformation to the histogram of image reveals information about the objects in the image.

After some research I found that method to extract features and to perform segmentation from images using the image histogram based features extraction like HOG or SPACE/FREQUENCY analysis, etc are less efficient that using methods based on Neural Networks. It sounds logical to pursue the NN approach instead of more traditional approaches

Finding the missing masks

This function is not useful, it maybe transform to help with the mask merge function but as it is does not due anything.

By using the dataframe as the indexing structure of the images and their mask blend all the masks with its image to see which ones are missing or overlay.

In [25]:
def overlay_masks(img_path):
    """
    
    img_path:   Path to an image
    
    """ 
    
    print("Image name: {}".format(img_path.split('/')[0]))
    img = path_to_image(img_path)[1]
    plt.imshow(img)

    masks_list = []
    
    # Create a list with all the image masks 
    for m in training_images.loc[training_images['Image_Path'] == img_path]['Image_masks'].values[0]:
        masks_list.append(path_to_image(m)[1])
 
    # Blend one image as an example
    fig = plt.figure(figsize=(20, 20))
    for i in range(len(masks_list)):
        ax = fig.add_subplot(int(len(masks_list)/3),int(len(masks_list)/4),i+1, xticks=[], yticks=[])
        mask = cv2.addWeighted(img,0.7,masks_list[i],0.3,0)
        ax.imshow(mask, cmap='gray')
        ax.set_title('Mask %s' % str(i+1))
    plt.show()

# Plot three samples
for i in [0,4,10]:
    overlay_masks(training_images['Image_Path'].loc[i])
Image name: 573e1480b500c395f8d3f1800e1998bf553af0d3d43039333d33cf37d08f64e5
Image name: a90401357d50e1376354ae6e5f56a2e4dff3fdb5a4e8d50316673b2b8f1f293b
Image name: 0ddd8deaf1696db68b00c600601c6a74a0502caaf274222c8367bdc31458ae7e

Implement the tensorflow unet

We create are trainin, validation, test sets

In [27]:
# Create a training a validation set

# Training set is 80% of the images and 20% left for test
training_set = training_images.sample(frac=0.80)
validation_set = training_images.copy()
validation_set = validation_set.drop(training_images.index[[list(training_set.index)]])

print("Total number of images: {}".format(len(training_images)))
print("Training set images: {}".format(len(training_set)))
print("Validations set images: {}".format(len(validation_set)))
Total number of images: 670
Training set images: 536
Validations set images: 134

We create a generation function to create batches of images to be passed to the NN

In [ ]:
# Generation function
In [29]:
from __future__ import division, print_function
from tf_unet import unet, util, image_util, image_gen


nx = 572
ny = 572

# Create a random image generator
generator = image_gen.GrayScaleDataProvider(nx, ny, cnt=20)
    

print("Image number of channels: {}".format(generator.channels))
print("Image number of classes: {}".format(generator.n_class))

x_test, y_test = generator(1)
fig, ax = plt.subplots(1,2, sharey=True, figsize=(8,4))
ax[0].imshow(x_test[0,...,0], aspect="auto")
ax[1].imshow(y_test[0,...,1], aspect="auto")

plt.show()

# Generate the UNET network
net = unet.Unet(channels=generator.channels, n_class=generator.n_class, layers=3, features_root=16)

# configure trainer optimizer
trainer = unet.Trainer(net, optimizer="momentum", opt_kwargs=dict(momentum=0.2))

# Trained
path = trainer.train(generator, "./unet_trained", training_iters=20, epochs=10, display_step=2)
Image number of channels: 1
Image number of classes: 2
2018-02-17 16:07:31,373 Layers 3, features 16, filter size 3x3, pool size: 2x2
2018-02-17 16:07:32,617 Removing '/home/gaure/Google_Drive/kaggle/training_set/prediction'
2018-02-17 16:07:32,619 Removing '/home/gaure/Google_Drive/kaggle/training_set/unet_trained'
2018-02-17 16:07:32,621 Allocating '/home/gaure/Google_Drive/kaggle/training_set/prediction'
2018-02-17 16:07:32,622 Allocating '/home/gaure/Google_Drive/kaggle/training_set/unet_trained'
2018-02-17 16:07:41,767 Verification error= 83.5%, loss= 0.7072
2018-02-17 16:07:42,321 Start optimization
2018-02-17 16:07:48,294 Iter 0, Minibatch Loss= 0.6395, Training Accuracy= 0.8167, Minibatch error= 18.3%
2018-02-17 16:07:58,392 Iter 2, Minibatch Loss= 0.5702, Training Accuracy= 0.8477, Minibatch error= 15.2%
2018-02-17 16:08:08,429 Iter 4, Minibatch Loss= 0.5543, Training Accuracy= 0.8008, Minibatch error= 19.9%
2018-02-17 16:08:18,505 Iter 6, Minibatch Loss= 0.5236, Training Accuracy= 0.8062, Minibatch error= 19.4%
2018-02-17 16:08:28,590 Iter 8, Minibatch Loss= 0.4208, Training Accuracy= 0.8769, Minibatch error= 12.3%
2018-02-17 16:08:38,628 Iter 10, Minibatch Loss= 0.4888, Training Accuracy= 0.8058, Minibatch error= 19.4%
2018-02-17 16:08:48,680 Iter 12, Minibatch Loss= 0.5053, Training Accuracy= 0.7885, Minibatch error= 21.1%
2018-02-17 16:08:58,762 Iter 14, Minibatch Loss= 0.4306, Training Accuracy= 0.8390, Minibatch error= 16.1%
2018-02-17 16:09:08,974 Iter 16, Minibatch Loss= 0.4232, Training Accuracy= 0.8397, Minibatch error= 16.0%
2018-02-17 16:09:19,014 Iter 18, Minibatch Loss= 0.4143, Training Accuracy= 0.8429, Minibatch error= 15.7%
2018-02-17 16:09:23,358 Epoch 0, Average loss: 0.5088, learning rate: 0.2000
2018-02-17 16:09:31,725 Verification error= 16.5%, loss= 0.4259
2018-02-17 16:09:38,250 Iter 20, Minibatch Loss= 0.3683, Training Accuracy= 0.8618, Minibatch error= 13.8%
2018-02-17 16:09:48,320 Iter 22, Minibatch Loss= 0.4517, Training Accuracy= 0.8033, Minibatch error= 19.7%
2018-02-17 16:09:58,324 Iter 24, Minibatch Loss= 0.4200, Training Accuracy= 0.8193, Minibatch error= 18.1%
2018-02-17 16:10:08,376 Iter 26, Minibatch Loss= 0.4161, Training Accuracy= 0.8180, Minibatch error= 18.2%
2018-02-17 16:10:18,439 Iter 28, Minibatch Loss= 0.3534, Training Accuracy= 0.8490, Minibatch error= 15.1%
2018-02-17 16:10:28,503 Iter 30, Minibatch Loss= 0.3360, Training Accuracy= 0.8493, Minibatch error= 15.1%
2018-02-17 16:10:38,558 Iter 32, Minibatch Loss= 0.3837, Training Accuracy= 0.8035, Minibatch error= 19.7%
2018-02-17 16:10:48,796 Iter 34, Minibatch Loss= 0.2967, Training Accuracy= 0.8506, Minibatch error= 14.9%
2018-02-17 16:10:58,855 Iter 36, Minibatch Loss= 0.2693, Training Accuracy= 0.8610, Minibatch error= 13.9%
2018-02-17 16:11:09,149 Iter 38, Minibatch Loss= 0.4374, Training Accuracy= 0.7728, Minibatch error= 22.7%
2018-02-17 16:11:13,535 Epoch 1, Average loss: 0.3868, learning rate: 0.1900
2018-02-17 16:11:21,901 Verification error= 16.5%, loss= 0.3650
2018-02-17 16:11:28,310 Iter 40, Minibatch Loss= 0.4091, Training Accuracy= 0.8449, Minibatch error= 15.5%
2018-02-17 16:11:38,463 Iter 42, Minibatch Loss= 0.3537, Training Accuracy= 0.7999, Minibatch error= 20.0%
2018-02-17 16:11:48,519 Iter 44, Minibatch Loss= 0.2034, Training Accuracy= 0.8375, Minibatch error= 16.3%
2018-02-17 16:11:58,574 Iter 46, Minibatch Loss= 0.1703, Training Accuracy= 0.9056, Minibatch error= 9.4%
2018-02-17 16:12:08,644 Iter 48, Minibatch Loss= 0.1242, Training Accuracy= 0.9804, Minibatch error= 2.0%
2018-02-17 16:12:18,708 Iter 50, Minibatch Loss= 0.1466, Training Accuracy= 0.9722, Minibatch error= 2.8%
2018-02-17 16:12:28,744 Iter 52, Minibatch Loss= 0.0786, Training Accuracy= 0.9822, Minibatch error= 1.8%
2018-02-17 16:12:38,826 Iter 54, Minibatch Loss= 1.4808, Training Accuracy= 0.2596, Minibatch error= 74.0%
2018-02-17 16:12:48,949 Iter 56, Minibatch Loss= 0.3497, Training Accuracy= 0.8719, Minibatch error= 12.8%
2018-02-17 16:12:59,024 Iter 58, Minibatch Loss= 0.2599, Training Accuracy= 0.8147, Minibatch error= 18.5%
2018-02-17 16:13:03,365 Epoch 2, Average loss: 0.3615, learning rate: 0.1805
2018-02-17 16:13:11,957 Verification error= 16.5%, loss= 0.2465
2018-02-17 16:13:18,463 Iter 60, Minibatch Loss= 0.1853, Training Accuracy= 0.8435, Minibatch error= 15.7%
2018-02-17 16:13:28,487 Iter 62, Minibatch Loss= 0.1947, Training Accuracy= 0.8531, Minibatch error= 14.7%
2018-02-17 16:13:38,839 Iter 64, Minibatch Loss= 0.2652, Training Accuracy= 0.7780, Minibatch error= 22.2%
2018-02-17 16:13:49,185 Iter 66, Minibatch Loss= 0.1697, Training Accuracy= 0.8672, Minibatch error= 13.3%
2018-02-17 16:13:59,365 Iter 68, Minibatch Loss= 0.1382, Training Accuracy= 0.8563, Minibatch error= 14.4%
2018-02-17 16:14:09,445 Iter 70, Minibatch Loss= 0.1904, Training Accuracy= 0.8244, Minibatch error= 17.6%
2018-02-17 16:14:19,545 Iter 72, Minibatch Loss= 0.2443, Training Accuracy= 0.7968, Minibatch error= 20.3%
2018-02-17 16:14:29,586 Iter 74, Minibatch Loss= 0.1394, Training Accuracy= 0.8600, Minibatch error= 14.0%
2018-02-17 16:14:39,610 Iter 76, Minibatch Loss= 0.8909, Training Accuracy= 0.9106, Minibatch error= 8.9%
2018-02-17 16:14:49,702 Iter 78, Minibatch Loss= 0.2431, Training Accuracy= 0.8458, Minibatch error= 15.4%
2018-02-17 16:14:54,052 Epoch 3, Average loss: 0.2960, learning rate: 0.1715
2018-02-17 16:15:02,434 Verification error= 16.5%, loss= 0.2498
2018-02-17 16:15:09,035 Iter 80, Minibatch Loss= 0.1688, Training Accuracy= 0.8439, Minibatch error= 15.6%
2018-02-17 16:15:19,242 Iter 82, Minibatch Loss= 0.2220, Training Accuracy= 0.8154, Minibatch error= 18.5%
2018-02-17 16:15:29,308 Iter 84, Minibatch Loss= 0.1681, Training Accuracy= 0.7982, Minibatch error= 20.2%
2018-02-17 16:15:40,187 Iter 86, Minibatch Loss= 0.1839, Training Accuracy= 0.8833, Minibatch error= 11.7%
2018-02-17 16:15:50,311 Iter 88, Minibatch Loss= 0.2644, Training Accuracy= 0.8169, Minibatch error= 18.3%
2018-02-17 16:16:00,351 Iter 90, Minibatch Loss= 0.1571, Training Accuracy= 0.8566, Minibatch error= 14.3%
2018-02-17 16:16:10,447 Iter 92, Minibatch Loss= 0.1921, Training Accuracy= 0.8367, Minibatch error= 16.3%
2018-02-17 16:16:20,507 Iter 94, Minibatch Loss= 0.1216, Training Accuracy= 0.8809, Minibatch error= 11.9%
2018-02-17 16:16:30,545 Iter 96, Minibatch Loss= 0.1529, Training Accuracy= 0.8272, Minibatch error= 17.3%
2018-02-17 16:16:40,614 Iter 98, Minibatch Loss= 0.2249, Training Accuracy= 0.8026, Minibatch error= 19.7%
2018-02-17 16:16:44,960 Epoch 4, Average loss: 0.2180, learning rate: 0.1629
2018-02-17 16:16:53,318 Verification error= 16.5%, loss= 0.5321
2018-02-17 16:16:59,722 Iter 100, Minibatch Loss= 0.3417, Training Accuracy= 0.8054, Minibatch error= 19.5%
2018-02-17 16:17:09,975 Iter 102, Minibatch Loss= 0.2324, Training Accuracy= 0.8694, Minibatch error= 13.1%
2018-02-17 16:17:20,029 Iter 104, Minibatch Loss= 0.2600, Training Accuracy= 0.7933, Minibatch error= 20.7%
2018-02-17 16:17:30,083 Iter 106, Minibatch Loss= 0.1581, Training Accuracy= 0.8642, Minibatch error= 13.6%
2018-02-17 16:17:40,223 Iter 108, Minibatch Loss= 0.1896, Training Accuracy= 0.8062, Minibatch error= 19.4%
2018-02-17 16:17:50,223 Iter 110, Minibatch Loss= 0.1861, Training Accuracy= 0.8376, Minibatch error= 16.2%
2018-02-17 16:18:00,309 Iter 112, Minibatch Loss= 0.2372, Training Accuracy= 0.8042, Minibatch error= 19.6%
2018-02-17 16:18:10,368 Iter 114, Minibatch Loss= 0.1969, Training Accuracy= 0.8329, Minibatch error= 16.7%
2018-02-17 16:18:20,425 Iter 116, Minibatch Loss= 0.2380, Training Accuracy= 0.8401, Minibatch error= 16.0%
2018-02-17 16:18:30,452 Iter 118, Minibatch Loss= 0.1678, Training Accuracy= 0.7778, Minibatch error= 22.2%
2018-02-17 16:18:34,811 Epoch 5, Average loss: 0.2234, learning rate: 0.1548
2018-02-17 16:18:43,215 Verification error= 16.5%, loss= 0.2059
2018-02-17 16:18:49,675 Iter 120, Minibatch Loss= 0.1340, Training Accuracy= 0.8295, Minibatch error= 17.1%
2018-02-17 16:18:59,708 Iter 122, Minibatch Loss= 0.1914, Training Accuracy= 0.8046, Minibatch error= 19.5%
2018-02-17 16:19:09,956 Iter 124, Minibatch Loss= 0.2527, Training Accuracy= 0.8124, Minibatch error= 18.8%
2018-02-17 16:19:20,055 Iter 126, Minibatch Loss= 0.1803, Training Accuracy= 0.8100, Minibatch error= 19.0%
2018-02-17 16:19:30,113 Iter 128, Minibatch Loss= 0.2609, Training Accuracy= 0.7837, Minibatch error= 21.6%
2018-02-17 16:19:40,261 Iter 130, Minibatch Loss= 0.6932, Training Accuracy= 0.8538, Minibatch error= 14.6%
2018-02-17 16:19:50,324 Iter 132, Minibatch Loss= 0.1227, Training Accuracy= 0.8647, Minibatch error= 13.5%
2018-02-17 16:20:00,375 Iter 134, Minibatch Loss= 0.1889, Training Accuracy= 0.8184, Minibatch error= 18.2%
2018-02-17 16:20:10,416 Iter 136, Minibatch Loss= 0.3092, Training Accuracy= 0.7955, Minibatch error= 20.4%
2018-02-17 16:20:20,473 Iter 138, Minibatch Loss= 0.2303, Training Accuracy= 0.8479, Minibatch error= 15.2%
2018-02-17 16:20:24,819 Epoch 6, Average loss: 0.2141, learning rate: 0.1470
2018-02-17 16:20:33,204 Verification error= 16.5%, loss= 0.2338
2018-02-17 16:20:39,662 Iter 140, Minibatch Loss= 0.1712, Training Accuracy= 0.8697, Minibatch error= 13.0%
2018-02-17 16:20:49,745 Iter 142, Minibatch Loss= 0.2454, Training Accuracy= 0.8364, Minibatch error= 16.4%
2018-02-17 16:20:59,755 Iter 144, Minibatch Loss= 0.2895, Training Accuracy= 0.8004, Minibatch error= 20.0%
2018-02-17 16:21:10,047 Iter 146, Minibatch Loss= 0.2241, Training Accuracy= 0.8319, Minibatch error= 16.8%
2018-02-17 16:21:20,148 Iter 148, Minibatch Loss= 0.2728, Training Accuracy= 0.8448, Minibatch error= 15.5%
2018-02-17 16:21:30,186 Iter 150, Minibatch Loss= 0.2217, Training Accuracy= 0.8307, Minibatch error= 16.9%
2018-02-17 16:21:40,353 Iter 152, Minibatch Loss= 0.1622, Training Accuracy= 0.8056, Minibatch error= 19.4%
2018-02-17 16:21:50,419 Iter 154, Minibatch Loss= 0.1893, Training Accuracy= 0.8424, Minibatch error= 15.8%
2018-02-17 16:22:00,488 Iter 156, Minibatch Loss= 0.1402, Training Accuracy= 0.8353, Minibatch error= 16.5%
2018-02-17 16:22:10,535 Iter 158, Minibatch Loss= 0.1416, Training Accuracy= 0.8765, Minibatch error= 12.3%
2018-02-17 16:22:14,898 Epoch 7, Average loss: 0.1985, learning rate: 0.1397
2018-02-17 16:22:23,278 Verification error= 16.5%, loss= 0.2564
2018-02-17 16:22:29,713 Iter 160, Minibatch Loss= 0.0946, Training Accuracy= 0.8723, Minibatch error= 12.8%
2018-02-17 16:22:39,708 Iter 162, Minibatch Loss= 0.1321, Training Accuracy= 0.8283, Minibatch error= 17.2%
2018-02-17 16:22:49,742 Iter 164, Minibatch Loss= 0.1560, Training Accuracy= 0.8365, Minibatch error= 16.4%
2018-02-17 16:22:59,786 Iter 166, Minibatch Loss= 0.1911, Training Accuracy= 0.8233, Minibatch error= 17.7%
2018-02-17 16:23:10,051 Iter 168, Minibatch Loss= 0.1431, Training Accuracy= 0.8740, Minibatch error= 12.6%
2018-02-17 16:23:20,368 Iter 170, Minibatch Loss= 0.2040, Training Accuracy= 0.8808, Minibatch error= 11.9%
2018-02-17 16:23:30,376 Iter 172, Minibatch Loss= 0.2205, Training Accuracy= 0.8335, Minibatch error= 16.7%
2018-02-17 16:23:40,514 Iter 174, Minibatch Loss= 0.2065, Training Accuracy= 0.8657, Minibatch error= 13.4%
2018-02-17 16:23:50,566 Iter 176, Minibatch Loss= 0.1374, Training Accuracy= 0.8121, Minibatch error= 18.8%
2018-02-17 16:24:00,611 Iter 178, Minibatch Loss= 0.2133, Training Accuracy= 0.8678, Minibatch error= 13.2%
2018-02-17 16:24:04,956 Epoch 8, Average loss: 0.1679, learning rate: 0.1327
2018-02-17 16:24:13,371 Verification error= 16.5%, loss= 0.2718
2018-02-17 16:24:19,855 Iter 180, Minibatch Loss= 0.1631, Training Accuracy= 0.8460, Minibatch error= 15.4%
2018-02-17 16:24:29,827 Iter 182, Minibatch Loss= 0.1274, Training Accuracy= 0.8441, Minibatch error= 15.6%
2018-02-17 16:24:39,898 Iter 184, Minibatch Loss= 0.2209, Training Accuracy= 0.8573, Minibatch error= 14.3%
2018-02-17 16:24:49,916 Iter 186, Minibatch Loss= 0.1760, Training Accuracy= 0.8360, Minibatch error= 16.4%
2018-02-17 16:24:59,949 Iter 188, Minibatch Loss= 0.2374, Training Accuracy= 0.7989, Minibatch error= 20.1%
2018-02-17 16:25:10,250 Iter 190, Minibatch Loss= 0.4674, Training Accuracy= 0.8011, Minibatch error= 19.9%
2018-02-17 16:25:20,328 Iter 192, Minibatch Loss= 0.3394, Training Accuracy= 0.8006, Minibatch error= 19.9%
2018-02-17 16:25:30,398 Iter 194, Minibatch Loss= 0.2951, Training Accuracy= 0.7930, Minibatch error= 20.7%
2018-02-17 16:25:40,524 Iter 196, Minibatch Loss= 0.3106, Training Accuracy= 0.8065, Minibatch error= 19.3%
2018-02-17 16:25:50,562 Iter 198, Minibatch Loss= 0.1109, Training Accuracy= 0.8639, Minibatch error= 13.6%
2018-02-17 16:25:54,897 Epoch 9, Average loss: 0.2191, learning rate: 0.1260
2018-02-17 16:26:03,244 Verification error= 16.5%, loss= 0.2461
2018-02-17 16:26:04,061 Optimization Finished!

Every epoch the network executes a test on four images, to test the accurary. You can see the network knows when two circles are overlap. If you see first road third column image is the prediction, one of the circle is not completed but you can see is not part of the circle above; this seems greate we may not need to use two separate networks.

In [37]:
# Load the epoch 9 prediction
plt.figure(figsize=(30,30))
pred = mpimg.imread('/home/gaure/Google_Drive/kaggle/training_set/prediction/epoch_9.jpg')
predimg = plt.imshow(pred)
plt.show()
In [ ]: